Added new params for textInference #287
Pull request overview
This PR extends the text-inference request surface in the SDK by adding new request parameters and preprocessing for an additional input type. It fits into the existing runware.types request-model layer and the runware.base request-building/sending flow for text inference.
Changes:
- Added new text-inference settings/types for cache control and document inputs.
- Updated `ITextInferenceTool` to accept new schema/type-related fields and custom serialization.
- Added document preprocessing in the non-stream text request path.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `runware/types.py` | Adds new text-inference request models/fields (cache, documents) and changes tool initialization/serialization behavior. |
| `runware/base.py` | Processes `inputs.documents` before building non-stream text inference requests. |
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
Comments suppressed due to low confidence (1)
runware/utils.py:56
The `Environment.PRODUCTION` WebSocket base URL was changed to `wss://ws-api.runware.dev/v1`, which looks like a non-production endpoint and no longer matches the documented production endpoint (`wss://ws-api.runware.ai/v1`) or the production HTTP base URL (`https://api.runware.ai/v1`). This will cause default SDK connections (and the HTTP streaming URL mapping via `_WS_TO_HTTP`) to target the wrong host.
```python
BASE_RUNWARE_URLS = {
    Environment.PRODUCTION: "wss://ws-api.runware.dev/v1",
    Environment.TEST: "ws://localhost:8080",
}
# HTTP REST base URL for streaming (e.g. textInference with deliveryMethod=stream)
BASE_RUNWARE_HTTP_URLS = {
    Environment.PRODUCTION: "https://api.runware.ai/v1",
    Environment.TEST: "http://localhost:8080",
}
# Map each WebSocket base URL to its HTTP counterpart (for streaming requests).
_WS_TO_HTTP = {
    BASE_RUNWARE_URLS[Environment.PRODUCTION]: BASE_RUNWARE_HTTP_URLS[Environment.PRODUCTION],
    BASE_RUNWARE_URLS[Environment.TEST]: BASE_RUNWARE_HTTP_URLS[Environment.TEST],
}
```
Comments suppressed due to low confidence (1)
runware/utils.py:49
The `Environment.PRODUCTION` WebSocket base URL was changed to `wss://ws-api.runware.dev/v1`, while the production HTTP base URL remains `https://api.runware.ai/v1`. This likely breaks production connectivity (and creates an inconsistent WS↔HTTP mapping in `_WS_TO_HTTP`). If this isn't intentional, revert to the `.ai` endpoint; if it is intentional, the HTTP base URL and docs/config should be updated consistently.
```diff
 BASE_RUNWARE_URLS = {
-    Environment.PRODUCTION: "wss://ws-api.runware.ai/v1",
+    Environment.PRODUCTION: "wss://ws-api.runware.dev/v1",
```
Added
- `TextInferenceCacheScope = Literal["system", "system+history"]`
- `TextInferenceCacheTtl = Literal["5m", "1h"]`
- `ITextInferenceCache` dataclass (`scope`, `ttl`) with request key `cache`
- `ITextInputs` now includes `documents: Optional[List[Union[str, File]]]`
- `ITextInferenceTool` now supports:
  - `schema: Optional[Dict[str, Any]]`
  - `input_schema` alias (serialized as `schema` when `schema` is not provided)
  - `toolType` (serialized as `type`)
- `ITextInferenceToolChoice` request serialization support:
  - `request_key = "toolChoice"`
  - `toolType -> type` mapping in `serialize()`
Changed
- `toolChoice` moved from `ISettings` to the root `ITextInference` payload: `ITextInference.toolChoice: Optional[Union[ITextInferenceToolChoice, Dict[str, Any]]]`
- `_buildTextRequest()` now includes root-level `toolChoice`
- `runware/base.py`: input preprocessing extracted into `_processTextInputs()` and applied to both `_requestText()` and `_requestTextStream()`, so `images`/`videos`/`documents` are normalized consistently for sync/async/stream text requests.
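The serialization rules above can be illustrated with a compact sketch. Class and field names follow the changelog, but the `serialize()` bodies and the payload shape are assumptions for demonstration, not the SDK's actual implementation:

```python
from dataclasses import dataclass
from typing import Any, Dict, Literal, Optional

TextInferenceCacheScope = Literal["system", "system+history"]
TextInferenceCacheTtl = Literal["5m", "1h"]


@dataclass
class ITextInferenceCache:
    scope: TextInferenceCacheScope
    ttl: TextInferenceCacheTtl
    request_key = "cache"  # key this object serializes under in the request

    def serialize(self) -> Dict[str, Any]:
        return {"scope": self.scope, "ttl": self.ttl}


@dataclass
class ITextInferenceTool:
    name: str
    toolType: str = "function"
    schema: Optional[Dict[str, Any]] = None
    input_schema: Optional[Dict[str, Any]] = None  # alias, used if schema is absent

    def serialize(self) -> Dict[str, Any]:
        out: Dict[str, Any] = {"name": self.name, "type": self.toolType}  # toolType -> type
        effective = self.schema if self.schema is not None else self.input_schema
        if effective is not None:
            out["schema"] = effective
        return out


# toolChoice now sits at the root of the textInference payload, not in settings.
payload = {
    "taskType": "textInference",
    "toolChoice": {"type": "tool", "name": "lookup"},
    "cache": ITextInferenceCache(scope="system", ttl="5m").serialize(),
}
```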